Small sample sizes are a big issue, especially in neuroscience6
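The small-n problem is really a statistical power problem. A minimal sketch of the usual normal approximation for a two-sample test (effect size d, n per group, function name and defaults are my own, stdlib only):

```python
# Rough power of a two-sided, two-sample test via the normal approximation.
# Illustrative sketch only; a real analysis would use the t-distribution.
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power to detect standardized effect size d."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)     # significance cutoff
    shift = d * (n_per_group / 2) ** 0.5             # noncentrality term
    return 1 - NormalDist().cdf(z_crit - shift)
```

For a medium effect (d = 0.5), ~64 animals per group gives roughly 80% power, while 10 per group gives around 20%: most small studies are coin flips.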
Academic journals suck
Journal status and paper quality are poorly correlated7,8
The space is dominated by five big publishing houses - they are evil9
Strong bias against negative results
Do they spend their staggering profits checking for obvious signs of fraud, encouraging replication, or even just making science more readable and accessible? Of course not.10
Nice documentary on the business of scholarship here11
GUIs suck
Graphical user interface (GUI) tools like Excel, SPSS & GraphPad are very opaque and error-prone, as our government learnt during COVID12
The Excel mistake heard around the world and the lasting economic repercussions13
Proprietary software - many people can’t access it and therefore can’t replicate the analysis
No obvious history of changes made or operations performed
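The COVID Excel incident happened because legacy .xls sheets silently drop rows past 65,536. A scripted pipeline can fail loudly instead. A minimal sketch (file layout and function name are hypothetical):

```python
# Sketch: a scripted, auditable alternative to pasting data into a GUI.
# Legacy .xls worksheets cap out at 65,536 rows and discard the overflow
# without warning; a script can refuse to proceed instead.
import csv

XLS_ROW_LIMIT = 65_536

def load_cases(path):
    """Load case records from a CSV, failing loudly on suspected truncation."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    if len(rows) >= XLS_ROW_LIMIT:
        # A spreadsheet would have silently thrown these rows away.
        raise ValueError(f"{len(rows)} rows: check for truncation upstream")
    return rows
```

Because it is plain text, the script (unlike a sequence of clicks) can also live in version control, giving exactly the history of operations that GUIs lack.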
Stats suck
Frequentist statistics is used almost exclusively across science
It is extremely unintuitive, prone to abuse, and rarely done correctly in practice (p-hacking)4
Bayesian statistics is a fundamentally different approach: rather than testing data against a null hypothesis, it estimates the probability of hypotheses directly, so there are no p-values and nothing to p-hack
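Why p-hacking works is easy to show. Under the null hypothesis p-values are uniform on [0, 1], so running enough comparisons makes a spurious "significant" result almost inevitable. A minimal simulation (function name and parameters are my own):

```python
# Sketch: false-positive rate when a "study" runs many independent tests
# and reports any p < alpha. Under the null, p-values are uniform.
import random

def chance_of_false_positive(n_tests, alpha=0.05, n_sims=10_000, seed=0):
    """Fraction of simulated studies that find at least one spurious hit."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        # One study = n_tests independent null p-values.
        if any(rng.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / n_sims
```

One test behaves as advertised (~5% false positives); twenty uncorrected tests push the rate to roughly 1 - 0.95**20 ≈ 64%.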
Experimental designs suck
Positive control? pretty pls?
An investigator cannot guarantee that the claims made in a study are correct
Reproducibility is important not because it ensures that the results are correct, but rather because it ensures transparency and gives us confidence in understanding exactly what was done.
An example of a wee goof in the field
That time a major paper that supposedly discovered A\(\beta\)*56, an amyloid-beta oligomer species, turned out to be full of image manipulations, whoops15